Property Checking with Interpretable Error Characterization for Recurrent Neural Networks
Authors
Abstract
This paper presents a novel on-the-fly, black-box, property-checking-through-learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique steps on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with the requirements to be checked, possibly modeled as RNNs themselves. On the one hand, if the output of the learning algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm's parameters) on the probability of the language of the black-box being nonempty. This implies that the property holds with probabilistic guarantees. On the other hand, if the DFA is nonempty, it is certain that the language of the black-box is nonempty, which entails that the RNN does not satisfy the requirement for sure. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific requirement specification formalism and is capable of handling nonregular languages as well. Besides, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
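The core of the approach can be sketched, in heavily simplified form, as a PAC-style random-sampling emptiness check of the black-box language (the paper's DFA-learning step is omitted here). All names, the sample-size formula's use, and the toy classifier/property below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def pac_sample_size(epsilon, delta):
    # Samples needed so that, if none is accepted by the black-box,
    # P(language has measure > epsilon) < delta.
    return math.ceil(math.log(1.0 / delta) / epsilon)

def check_property(rnn_accepts, property_holds, sample_sequence,
                   epsilon=0.05, delta=0.01):
    """Black-box language = {w : rnn_accepts(w) and not property_holds(w)}.
    Returns (True, None) under a PAC guarantee, or (False, witness)."""
    n = pac_sample_size(epsilon, delta)
    for _ in range(n):
        w = sample_sequence()
        if rnn_accepts(w) and not property_holds(w):
            # Certain violation: w is an explicit, interpretable witness.
            return False, w
    # No violation observed: property holds with probability >= 1 - epsilon,
    # with confidence 1 - delta.
    return True, None

# Toy stand-ins (assumptions, not the paper's models): the "RNN" accepts
# sequences with even sum; the requirement demands the sum stays below 10.
rnn = lambda w: sum(w) % 2 == 0
prop = lambda w: sum(w) < 10
sampler = lambda: [random.randint(0, 5) for _ in range(random.randint(1, 6))]
ok, witness = check_property(rnn, prop, sampler)
```

Note that the sketch only returns a single counterexample sequence, whereas the paper's method returns a nonempty DFA characterizing the whole error language.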
Similar resources
MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks
We introduce MinimalRNN, a new recurrent neural network architecture that achieves performance comparable to the popular gated RNNs with a simplified structure. It employs minimal updates within the RNN, which not only leads to efficient learning and testing but, more importantly, to better interpretability and trainability. We demonstrate that by endorsing the more restrictive update rule, MinimalRNN l...
Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery
Recurrent neural networks (RNNs) are powerful and effective for processing sequential data. However, RNNs are usually considered “black box” models whose internal structure and learned parameters are not interpretable. In this paper, we propose an interpretable RNN based on the sequential iterative soft-thresholding algorithm (SISTA) for solving the sequential sparse recovery problem, which mod...
Interpretable Neural Networks with BP-SOM
Interpretation of models induced by artificial neural networks is often a difficult task. In this paper we focus on a relatively novel neural network architecture and learning algorithm, bp-som, that offers possibilities to overcome this difficulty. It is shown that networks trained with bp-som show interesting regularities, in that hidden-unit activations become restricted to discrete values, and th...
PatchNet: Interpretable Neural Networks for Image Classification
The ability to visually understand and interpret learned features from complex predictive models is crucial for their acceptance in sensitive areas such as health care. To move closer to this goal of truly interpretable complex models, we present PatchNet, a network that restricts global context for image classification tasks in order to easily provide visual representations of learned texture ...
Error Bounds for Approximation with Neural Networks
In this paper we prove convergence rates for the problem of approximating functions f by neural networks and similar constructions. We show that the rates are the better the smoother the activation functions are, provided that f satisfies an integral representation. We give error bounds not only in Hilbert spaces but in general Sobolev spaces W^{m,r}(Ω). Finally, we apply our results to a class o...
Journal
Journal title: Machine Learning and Knowledge Extraction
Year: 2021
ISSN: 2504-4990
DOI: https://doi.org/10.3390/make3010010